Reinforcement learning algorithms often require finiteness of the state and action spaces of a Markov decision process (MDP), and various efforts have been made in the literature toward the applicability of such algorithms to continuous state and action spaces. In this paper, we show that under very mild regularity conditions (in particular, involving only weak continuity of the transition kernel of the MDP), Q-learning applied to quantized states and actions converges to a limit for standard Borel MDPs, and furthermore that this limit satisfies an optimality equation which leads to near optimality, either with explicit performance bounds or with guaranteed asymptotic optimality. Our approach builds on (i) viewing quantization as a measurement kernel, and thus the quantized MDP as a POMDP, (ii) utilizing near-optimality and convergence results for Q-learning on POMDPs, and (iii) finally, near-optimality of finite-state model approximations for MDPs with weakly continuous kernels, which we show correspond to the fixed point of the constructed POMDP. Thus, our paper presents a very general convergence and approximation result on the applicability of Q-learning to continuous MDPs.
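As an informal illustration of the quantization idea only (not the paper's construction), the sketch below runs standard tabular Q-learning on a scalar continuous-state, continuous-action environment after discretizing both spaces into finite grids. The environment interface (`reset`/`step`, `state_low`/`state_high`, `action_low`/`action_high`), the grid sizes, and all learning parameters are assumptions made for the example.

```python
# A minimal sketch of Q-learning over quantized states and actions,
# assuming a scalar-state, scalar-action environment with the interface
# described above (these names are illustrative, not from the paper).
import numpy as np

def make_quantizer(low, high, n_bins):
    """Map a scalar in [low, high] to one of n_bins cells."""
    edges = np.linspace(low, high, n_bins + 1)
    def q(x):
        return int(np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1))
    return q

def quantized_q_learning(env, n_state_bins=20, n_action_bins=10,
                         episodes=500, gamma=0.95, alpha=0.1, eps=0.1):
    qs = make_quantizer(env.state_low, env.state_high, n_state_bins)
    # Representative actions for each quantization cell.
    qa_reps = np.linspace(env.action_low, env.action_high, n_action_bins)
    Q = np.zeros((n_state_bins, n_action_bins))   # tabular Q over quantized cells
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s = qs(env.reset())
        done = False
        while not done:
            a = rng.integers(n_action_bins) if rng.random() < eps else int(Q[s].argmax())
            # Assumed step signature: (next_state, reward, done).
            x_next, r, done = env.step(qa_reps[a])
            s_next = qs(x_next)
            # Standard Q-learning update on the induced finite model.
            target = r + gamma * (0.0 if done else Q[s_next].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```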
In this paper, we introduce a regularized mean-field game and study learning of this game under an infinite-horizon discounted reward function. Regularization is introduced by adding a strongly concave regularization function to the one-stage reward function of the classical mean-field game model. We propose a value-iteration-based learning algorithm for this regularized mean-field game using fitted Q-learning. The regularization term in general makes the reinforcement learning algorithm more robust with respect to the system components. Moreover, it enables us to establish an error analysis of the learning algorithm without imposing restrictive convexity assumptions on the system components, which are needed in the absence of a regularization term.
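As an illustration only: one common strongly concave regularizer is the negative entropy, under which the regularized Bellman update takes a log-sum-exp ("soft") form. The sketch below implements fitted Q-iteration with this entropy regularizer and a linear least-squares function class; the choice of regularizer, the temperature `tau`, the data format, and the folding of the mean-field coupling into the observed reward are all assumptions, not the paper's exact algorithm.

```python
# A minimal sketch of fitted Q-iteration with an entropy regularizer,
# as one possible instance of regularized value iteration (illustrative only).
import numpy as np

def soft_value(q_row, tau):
    """Regularized (soft) maximum: tau * log sum_a exp(q(a) / tau)."""
    m = q_row.max()
    return m + tau * np.log(np.exp((q_row - m) / tau).sum())

def fitted_q_iteration(transitions, features, n_actions,
                       gamma=0.95, tau=0.1, n_iters=50):
    """transitions: list of (x, a, r, x_next); features(x, a) -> 1-D array."""
    d = features(transitions[0][0], 0).shape[0]
    theta = np.zeros(d)                  # linear model Q(x, a) = <theta, features(x, a)>
    for _ in range(n_iters):
        Phi, y = [], []
        for (x, a, r, x_next) in transitions:
            q_next = np.array([features(x_next, b) @ theta for b in range(n_actions)])
            y.append(r + gamma * soft_value(q_next, tau))   # regularized Bellman target
            Phi.append(features(x, a))
        # Least-squares fit of the Q-function to the regularized targets.
        theta, *_ = np.linalg.lstsq(np.asarray(Phi), np.asarray(y), rcond=None)
    return theta
```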
We consider learning approximate Nash equilibria for discrete-time mean-field games with nonlinear stochastic state dynamics, under both average and discounted cost criteria. To this end, we introduce a mean-field equilibrium (MFE) operator whose fixed point is a mean-field equilibrium (i.e., an equilibrium in the infinite-population limit). We first prove that this operator is a contraction, and propose a learning algorithm that computes an approximate mean-field equilibrium by approximating the MFE operator with a random one. Moreover, using the contraction property of the MFE operator, we establish an error analysis of the proposed learning algorithm. We then show that the learned mean-field equilibrium constitutes an approximate Nash equilibrium for finite-agent games.
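As a rough sketch of the fixed-point idea only (not the paper's MFE operator or its random approximation), the loop below alternates between computing an approximate best response against the current mean-field term and updating the induced state distribution, stopping when the distribution stops changing. The helpers `best_response_q` and `induced_distribution` are hypothetical placeholders for the learning subroutine and the state-distribution update.

```python
# A minimal sketch of fixed-point iteration toward a mean-field equilibrium
# over a finite state space, with hypothetical helper functions (see above).
import numpy as np

def mfe_fixed_point(mu0, best_response_q, induced_distribution,
                    n_iters=100, tol=1e-6):
    """Iterate mu <- Lambda(mu) until the mean-field term stops changing."""
    mu = mu0                                  # distribution over states (1-D array)
    policy = None
    for _ in range(n_iters):
        Q = best_response_q(mu)               # approximate optimal Q against mu
        policy = Q.argmax(axis=1)             # greedy best-response policy
        mu_next = induced_distribution(policy, mu)
        if np.abs(mu_next - mu).sum() < tol:  # L1 stopping rule
            return mu_next, policy
        mu = mu_next
    return mu, policy
```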